Talk by Tilke Judd of MIT. Given to the Redwood Center for Theoretical Neuroscience at UC Berkeley on July 23, 2010.
For many applications in graphics, design, and human-computer interaction, it is essential to understand where humans look in a scene. Towards this goal I will present two different projects. The first tries to understand and model where people look in high-resolution images using a data-driven, machine learning approach. We collected eye-tracking data from 15 viewers on 1003 images and used this database for training and testing to learn a model of saliency based on both bottom-up and top-down image features.

The second project aims to understand where people look when an image is shown at lower resolutions. It uncovers trends answering two questions: how much image resolution is needed before fixations are consistent with fixations on high-resolution images, and at which resolution are viewers most consistent about where they look? The work suggests that viewers' fixations start to be very consistent at around the same time as viewers start to understand the image content--and that this can happen at very low resolution.
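The data-driven approach described for the first project can be sketched in miniature as below. Everything here is an illustrative assumption rather than the talk's actual pipeline: the two feature maps (local contrast and a center-bias prior), the synthetic labels, and the least-squares learner are stand-ins for the many feature channels and SVM-style classifier such saliency work typically uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an eye-tracking dataset: each "pixel" gets a
# feature vector and a binary label indicating whether viewers fixated it.
# Feature names are hypothetical, not the features from the talk.
n_pixels = 5000
contrast = rng.random(n_pixels)      # bottom-up cue (local contrast)
center_bias = rng.random(n_pixels)   # top-down-ish prior (closeness to center)
X = np.column_stack([contrast, center_bias, np.ones(n_pixels)])

# Generate demo labels: fixation probability rises with both features
logits = 3.0 * contrast + 2.0 * center_bias - 2.5
y = (rng.random(n_pixels) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Learn a linear saliency model by least squares (a simple stand-in for
# the discriminative classifier used in this line of work)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The saliency map is the weighted sum of feature maps
saliency = X @ w
print("learned weights:", np.round(w, 2))
print("mean saliency on fixated pixels:   ", round(saliency[y == 1].mean(), 3))
print("mean saliency on non-fixated pixels:", round(saliency[y == 0].mean(), 3))
```

A sanity check on such a model is exactly the one printed above: predicted saliency should be higher on fixated pixels than on non-fixated ones, which is what ROC-style evaluations of saliency models measure more formally.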